Tunnel ventilation control via an actor-critic algorithm employing nonparametric policy gradients
Authors
Abstract
The appropriate operation of a tunnel ventilation system provides drivers passing through the tunnel with comfortable and safe driving conditions. Tunnel ventilation involves keeping the CO pollutant concentration and the visibility index (VI) at adequate levels by operating highly energy-consuming facilities such as jet fans. An efficient operating algorithm is therefore important both for maintaining a safe driving environment and for saving energy. In this research, a reinforcement learning (RL) method based on the actor-critic architecture and nonparametric policy gradients is applied as the control algorithm. The two objectives listed above, maintaining an adequate level of pollutants and minimizing power consumption, are incorporated into a reward formulation, the performance index to be maximized in the RL methodology. A nonparametric approach is adopted as a promising route to performing a rigorous gradient search in a function space of policies, improving the efficacy of the actor module. Extensive simulation studies performed with real data collected from an existing tunnel system confirm that the suggested algorithm accomplishes the control objectives and improves upon a previously developed RL-based control algorithm.
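To make the control setup concrete, the sketch below illustrates one way such a reward formulation and actor-critic update could be coded. It is a simplified, parametric illustration only: the limits, weights, feature layout, and Gaussian policy (CO_LIMIT, VI_LIMIT, W_CO, W_VI, W_E, etc.) are assumptions made for the example, not the formulation used in the paper, whose actor is nonparametric.

```python
# Illustrative sketch only (not the paper's formulation): a one-step
# actor-critic loop whose reward penalizes pollutant levels above
# assumed limits and the number of jet-fans running.
import numpy as np

CO_LIMIT = 25.0   # assumed CO limit (ppm)
VI_LIMIT = 0.005  # assumed visibility-index limit (extinction, 1/m)
W_CO, W_VI, W_E = 1.0, 1.0, 0.1  # assumed weights for the reward terms

def reward(co, vi, n_fans_on):
    """Performance index to maximize: low pollutant excess, low energy use."""
    pollutant_penalty = W_CO * max(0.0, co - CO_LIMIT) + W_VI * max(0.0, vi - VI_LIMIT)
    energy_penalty = W_E * n_fans_on
    return -(pollutant_penalty + energy_penalty)

class ActorCritic:
    """Simplified linear actor-critic with a Gaussian policy over the
    (continuous, later rounded) number of jet-fans to run."""
    def __init__(self, n_features, alpha_actor=1e-3, alpha_critic=1e-2,
                 gamma=0.99, sigma=1.0):
        self.theta = np.zeros(n_features)  # actor weights (policy mean)
        self.w = np.zeros(n_features)      # critic weights (state value)
        self.alpha_a, self.alpha_c = alpha_actor, alpha_critic
        self.gamma, self.sigma = gamma, sigma

    def act(self, phi):
        # Sample a continuous action around the current policy mean
        return np.random.normal(self.theta @ phi, self.sigma)

    def update(self, phi, action, r, phi_next):
        delta = r + self.gamma * (self.w @ phi_next) - self.w @ phi  # TD error
        self.w += self.alpha_c * delta * phi                         # critic step
        grad_log_pi = (action - self.theta @ phi) / self.sigma**2 * phi
        self.theta += self.alpha_a * delta * grad_log_pi             # actor step

# Minimal usage with made-up sensor features (scaled CO, VI, traffic volume):
agent = ActorCritic(n_features=3)
phi = np.array([0.8, 0.3, 1.0])
a = agent.act(phi)
r = reward(co=28.0, vi=0.004, n_fans_on=max(0, int(round(a))))
agent.update(phi, a, r, phi_next=np.array([0.7, 0.3, 1.0]))
```

The paper's contribution replaces the parametric actor above with a nonparametric gradient search in a function space of policies; the sketch only conveys the reward shaping and the TD-error-driven interplay between actor and critic.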
Similar articles
Natural Actor-Critic
This paper investigates a novel model-free reinforcement learning architecture, the Natural Actor-Critic. The actor updates are based on stochastic policy gradients employing Amari’s natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural p...
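For context, the natural gradient referred to here is the ordinary policy gradient preconditioned by the inverse Fisher information matrix of the policy; in standard textbook notation (symbols not taken from the cited abstract):

```latex
\tilde{\nabla}_{\theta} J(\theta) = F(\theta)^{-1}\,\nabla_{\theta} J(\theta),
\qquad
F(\theta) = \mathbb{E}_{\pi_{\theta}}\!\left[
  \nabla_{\theta}\log\pi_{\theta}(a\mid s)\,
  \nabla_{\theta}\log\pi_{\theta}(a\mid s)^{\top}
\right].
```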
Full text
Applying the Episodic Natural Actor-Critic Architecture to Motor Primitive Learning
In this paper, we investigate motor primitive learning with the Natural Actor-Critic approach. The Natural Actor-Critic consists of actor updates achieved using natural stochastic policy gradients, while the critic obtains the natural policy gradient by linear regression. We show that this architecture can be used to learn the "building blocks of movement generation", called motor ...
Full text
Reinforcement Learning by Value Gradients
The concept of the value gradient is introduced and developed in the context of reinforcement learning, for deterministic episodic control problems that use a function approximator and have a continuous state space. It is shown that by learning the value gradients, instead of just the values themselves, exploration or stochastic behaviour is no longer needed to find locally optimal trajectories ...
Full text
Supervised Actor-Critic Reinforcement Learning
Editor’s Summary: Chapter ?? introduced policy gradients as a way to improve on stochastic search of the policy space when learning. This chapter presents supervised actor-critic reinforcement learning as another method for improving the effectiveness of learning. With this approach, a supervisor adds structure to a learning problem and supervised learning makes that structure part of an actor-...
Full text
Revisiting stochastic off-policy action-value gradients
Off-policy stochastic actor-critic methods rely on approximating the stochastic policy gradient in order to derive an optimal policy. One may also derive the optimal policy by approximating the action-value gradient. The use of action-value gradients is desirable as policy improvement occurs along the direction of steepest ascent. This has been studied extensively within the context of natural ...
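As a point of reference, in the deterministic limit the action-value-gradient form of policy improvement mentioned here coincides with the deterministic policy gradient; in standard notation (symbols not taken from the cited abstract):

```latex
\nabla_{\theta} J(\theta)
  = \mathbb{E}_{s \sim \rho^{\mu}}\!\left[
      \nabla_{\theta}\,\mu_{\theta}(s)\;
      \nabla_{a} Q^{\mu}(s,a)\big|_{a=\mu_{\theta}(s)}
    \right],
```

i.e. the policy parameters are moved along the gradient of the action-value function, which is the direction of steepest ascent the abstract alludes to.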
Full text